Online Alternate Generator Against Adversarial Attacks
Authors
Abstract
Similar Resources
Divide, Denoise, and Defend against Adversarial Attacks
Deep neural networks, although shown to be a successful class of machine learning algorithms, are known to be extremely unstable to adversarial perturbations. Improving the robustness of neural networks against these attacks is important, especially for security-critical applications. To defend against such attacks, we propose dividing the input image into multiple patches, denoising each patch...
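As a rough illustration of the divide-and-denoise idea, the sketch below splits a grayscale image into fixed-size patches and denoises each one independently. The box-blur denoiser, the patch size, and the function names are illustrative assumptions; the paper's actual denoiser is learned, not a fixed filter.

    import numpy as np

    def denoise_patch(patch):
        # Illustrative stand-in denoiser: a 3x3 box blur. (The paper uses a
        # learned denoiser; this is only a placeholder.)
        padded = np.pad(patch, 1, mode="edge")
        out = np.empty_like(patch, dtype=float)
        for i in range(patch.shape[0]):
            for j in range(patch.shape[1]):
                out[i, j] = padded[i:i + 3, j:j + 3].mean()
        return out

    def divide_and_denoise(image, patch_size=8):
        # Split a (H, W) grayscale image into non-overlapping patches,
        # denoise each patch independently, and stitch the results back.
        # Remainder pixels at the right/bottom edges are left untouched.
        h, w = image.shape
        out = image.astype(float)
        for y in range(0, h - h % patch_size, patch_size):
            for x in range(0, w - w % patch_size, patch_size):
                out[y:y + patch_size, x:x + patch_size] = denoise_patch(
                    out[y:y + patch_size, x:x + patch_size])
        return out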
Defending Non-Bayesian Learning against Adversarial Attacks
This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world, and try to collaboratively learn the true state. We focus on the impact of the adversarial agents on the performance of consensus-based non-Bayesian learning, where non-faulty agents combine local le...
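For context, a common consensus-based non-Bayesian update rule, geometric averaging of neighbors' beliefs followed by a local Bayesian step, can be written as follows; the notation is generic background, not necessarily the paper's own:

    \mu_{i,t+1}(\theta)
      = \frac{\ell_i(o_{i,t+1} \mid \theta) \prod_{j \in N_i} \mu_{j,t}(\theta)^{w_{ij}}}
             {\sum_{\theta' \in \Theta} \ell_i(o_{i,t+1} \mid \theta') \prod_{j \in N_i} \mu_{j,t}(\theta')^{w_{ij}}}

Here \mu_{i,t} is agent i's belief over the candidate states \Theta at time t, \ell_i is its local likelihood for observation o_{i,t+1}, N_i is its neighborhood, and the weights w_{ij} are nonnegative and sum to one.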
Protecting JPEG Images Against Adversarial Attacks
As deep neural networks (DNNs) have been integrated into critical systems, several methods to attack these systems have been developed. These adversarial attacks make imperceptible modifications to an image that fool DNN classifiers. We present an adaptive JPEG encoder which defends against many of these attacks. Experimentally, we show that our method produces images with high visual quality w...
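The baseline form of this defense, recompressing an input before classification, can be sketched with Pillow as below. The fixed quality setting and the helper name are assumptions of this sketch; the paper's encoder is adaptive rather than fixed-quality.

    from io import BytesIO
    from PIL import Image

    def jpeg_recompress(img, quality=75):
        # Round-trip the image through lossy JPEG encoding; the quantization
        # tends to remove high-frequency adversarial perturbations.
        buf = BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return Image.open(buf).copy()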
Fault Jumping Attacks against Shrinking Generator
In this paper we outline two cryptanalytic attacks against hardware implementations of the shrinking generator of Coppersmith et al., a classic design for low-cost, simple pseudorandom bitstream generators. This is a report on work in progress, since the implementation, and the careful tuning of the attack strategy needed to optimize the attack, are not yet complete.
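For reference, the shrinking generator combines two LFSRs: a selection register decides which bits of a data register survive into the keystream. A minimal sketch follows; the register lengths and tap positions are toy values chosen only for illustration and are not cryptographically meaningful.

    def lfsr(state, taps):
        # Fibonacci LFSR over GF(2): emit the last bit, then shift in the
        # XOR of the tapped positions. `state` is a list of 0/1 bits.
        while True:
            out = state[-1]
            feedback = 0
            for t in taps:
                feedback ^= state[t]
            state = [feedback] + state[:-1]
            yield out

    def shrinking_generator(data, selection, n):
        # Keep a data bit only when the selection LFSR outputs 1.
        keystream = []
        while len(keystream) < n:
            a, s = next(data), next(selection)
            if s == 1:
                keystream.append(a)
        return keystream

    # Example with toy parameters:
    bits = shrinking_generator(lfsr([1, 0, 1, 1, 0], (0, 2)),
                               lfsr([1, 1, 0, 1], (0, 3)), 16)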
Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks
Machine learning systems based on deep neural networks, being able to produce state-of-the-art results on various perception tasks, have gained mainstream adoption in many applications. However, they are shown to be vulnerable to adversarial example attacks, which generate malicious outputs by adding slight perturbations to the input. Previous adversarial example crafting methods, however, use s...
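As one concrete instance of the perturbation-based attacks this line of work studies, the Fast Gradient Sign Method (FGSM) of Goodfellow et al. is sketched below; it is a well-known baseline, not the crafting method proposed in this paper.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.03):
        # One signed-gradient step that increases the classification loss,
        # clamped so the adversarial image stays in the valid [0, 1] range.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()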
Journal
Journal title: IEEE Transactions on Image Processing
Year: 2020
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/tip.2020.3025404